Real-time face recognition on ARM platform based on deep learning
FANG Guokang, LI Jun, WANG Yaoru
Journal of Computer Applications    2019, 39 (8): 2217-2222.   DOI: 10.11772/j.issn.1001-9081.2019010164
To address the low real-time performance and low recognition rate of face recognition on the ARM platform, a real-time face recognition method based on deep learning was proposed. Firstly, an algorithm for detecting and tracking faces in real time was designed based on the MTCNN face detection algorithm. Then, a face feature extraction network based on the Residual Neural Network (ResNet) was designed for the ARM platform. Finally, according to the characteristics of the ARM platform, the Mali GPU was used to accelerate the face feature extraction network, sharing the CPU load and improving the overall running efficiency of the system. The algorithm was deployed on an ARM-based Rockchip development board, where it reached a running speed of 22 frames per second. Experimental results show that the recognition rate of this method on MegaFace is 11 percentage points higher than that of MobileFaceNet.
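The final recognition step in such a pipeline compares face feature vectors. A minimal sketch of that comparison, assuming embeddings are plain numeric vectors and using a hypothetical similarity threshold (not the paper's actual network or threshold):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_same_person(emb1, emb2, threshold=0.5):
    """Decide an identity match by thresholding embedding similarity.
    The 0.5 threshold is purely illustrative."""
    return cosine_similarity(emb1, emb2) >= threshold
```

In a deployed pipeline the embeddings would come from the ResNet feature extractor, and the threshold would be tuned on a validation set.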
Human skeleton key point detection method based on OpenPose-slim model
WANG Jianbing, LI Jun
Journal of Computer Applications    2019, 39 (12): 3503-3509.   DOI: 10.11772/j.issn.1001-9081.2019050954
The OpenPose model, originally proposed for human skeleton key point detection, greatly shortens the detection cycle while maintaining the accuracy of the Regional Multi-Person Pose Estimation (RMPE) model and the Mask Region-based Convolutional Neural Network (Mask R-CNN) model, both proposed in 2017 with near-optimal detection performance at the time. However, the OpenPose model suffers from a low parameter sharing rate, high redundancy, long running time and an overly large model size. To solve these problems, a new OpenPose-slim model was proposed, in which the network width was reduced, the number of convolution block layers was decreased, the original parallel structure was changed into a sequential structure, and a dense connection mechanism was added to the inner modules. The processing pipeline was mainly divided into three modules: 1) the key point localization module, which detected the position coordinates of human skeleton key points; 2) the key point association module, which connected the key point positions to limbs; 3) the limb matching module, which performed limb matching to obtain the contour of the human body. The processing stages are closely correlated. Experimental results on the MPII dataset, the Common Objects in COntext (COCO) dataset and the AI Challenger dataset show that using four localization modules and two association modules, with dense connections inside each module, is the best structure. Compared with the OpenPose model, the test cycle of the proposed model is shortened to nearly 1/6, the parameter size is reduced by nearly 50%, and the model size is reduced to nearly 1/27.
Self-training method based on semi-supervised clustering and data editing
LYU Jia, LI Junnan
Journal of Computer Applications    2018, 38 (1): 110-115.   DOI: 10.11772/j.issn.1001-9081.2017071721
To address the problems that the high-confidence unlabeled samples selected by self-training methods carry little information in each iteration and that self-training methods easily mislabel unlabeled samples, a Naive Bayes self-training method based on semi-supervised clustering and data editing was proposed. Firstly, semi-supervised clustering was applied to the small number of labeled samples and the large number of unlabeled samples, the unlabeled samples with high cluster membership were chosen, and they were then classified by Naive Bayes. Secondly, a data editing technique was used to filter out unlabeled samples with high clustering membership that were misclassified by Naive Bayes. This data editing technique filters noise by exploiting information from both labeled and unlabeled samples, solving the problem that the performance of traditional data editing techniques may degrade due to a lack of labeled samples. The effectiveness of the proposed algorithm was verified by comparative experiments on UCI datasets.
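The confidence-filtered self-training loop described above can be sketched with a toy nearest-centroid classifier standing in for the semi-supervised clustering and Naive Bayes steps (the classifier, the confidence measure and all names are illustrative assumptions, not the authors' method):

```python
def nearest_centroid_predict(centroids, x):
    """Return (label, confidence); confidence decays with distance."""
    best, dist = None, float("inf")
    for label, c in centroids.items():
        d = sum((xi - ci) ** 2 for xi, ci in zip(x, c)) ** 0.5
        if d < dist:
            best, dist = label, d
    return best, 1.0 / (1.0 + dist)

def self_train(labeled, unlabeled, conf_threshold=0.5, rounds=3):
    """Iteratively pseudo-label high-confidence unlabeled samples."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        # recompute class centroids from the current labeled set
        sums, counts = {}, {}
        for x, y in labeled:
            s = sums.setdefault(y, [0.0] * len(x))
            for i, xi in enumerate(x):
                s[i] += xi
            counts[y] = counts.get(y, 0) + 1
        centroids = {y: [v / counts[y] for v in s] for y, s in sums.items()}
        kept = []
        for x in pool:
            y, conf = nearest_centroid_predict(centroids, x)
            if conf >= conf_threshold:
                labeled.append((x, y))   # pseudo-label a confident sample
            else:
                kept.append(x)           # keep low-confidence samples in the pool
        pool = kept
    return labeled
```

The paper's data editing step would additionally remove pseudo-labeled samples that look misclassified; that filter is omitted here.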
Image denoising model with adaptive non-local data-fidelity term and bilateral total variation
GUO Li, LIAO Yu, LI Min, YUAN Hailin, LI Jun
Journal of Computer Applications    2017, 37 (8): 2334-2342.   DOI: 10.11772/j.issn.1001-9081.2017.08.2334
Aiming at the over-smoothing, residual noise around singular structures, contrast loss and staircase effect of common denoising methods, an image denoising model with an adaptive non-local data-fidelity term and bilateral total variation regularization was proposed, which provides an adaptive non-local regularization energy function and the corresponding variational framework. Firstly, the data-fidelity term was obtained by non-local means filtering with an adaptive weighting method. Secondly, bilateral total variation regularization was introduced into this framework, and a regularization factor was used to balance the data-fidelity term and the regularization term. Finally, the optimal solutions for different noise statistics were obtained by minimizing the energy function, so as to reduce residual noise and correct excessive smoothing. Theoretical analysis and simulation results on both synthetic-noise and real-noise images show that the proposed model can handle noise with different statistics; its Peak Signal-to-Noise Ratio (PSNR) is up to 0.6 dB higher than that of the adaptive non-local means filter. Compared with the total variation regularization algorithm, the subjective visual effect of the proposed model is obviously improved, texture details and edges are well preserved during denoising, the PSNR is increased by up to 10 dB, and the Multi-Scale Structural SIMilarity index (MS-SSIM) is increased by 0.3. Therefore, the proposed denoising model can better handle both the noise and the high-frequency details of images, and has practical application value in video and image resolution enhancement.
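The non-local means weighting that underlies the data-fidelity term can be illustrated on a 1-D signal (an illustrative sketch only; the paper's model operates on images, with an adaptive weighting scheme and a bilateral total variation term not shown here):

```python
import math

def nlm_denoise_1d(signal, patch=1, search=5, h=0.5):
    """Non-local means on a 1-D signal: each sample is replaced by a
    weighted average of samples whose neighbourhoods look similar."""
    n = len(signal)
    out = []
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # squared distance between the patches centred at i and j
            d = 0.0
            for k in range(-patch, patch + 1):
                ii = min(max(i + k, 0), n - 1)
                jj = min(max(j + k, 0), n - 1)
                d += (signal[ii] - signal[jj]) ** 2
            w = math.exp(-d / (h * h))  # similar patches get large weights
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

The filtering parameter h controls how quickly the weights decay with patch dissimilarity; the paper adapts this weighting rather than fixing it.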
Image restoration based on natural patch likelihood and sparse prior
LI Junshan, YANG Yawei, ZHU Zijiang, ZHANG Jiao
Journal of Computer Applications    2017, 37 (8): 2319-2323.   DOI: 10.11772/j.issn.1001-9081.2017.08.2319
Concerning the problem that images captured by optical systems suffer degradation including noise, blurring and geometric distortion when the imaging process is affected by defocusing, motion, atmospheric disturbance and photoelectric noise, a generic image restoration framework based on natural patch likelihood and a sparse prior was proposed. Firstly, on the basis of the natural-image sparse prior model, several patch likelihood models were compared; the results indicate that the image patch likelihood model can improve restoration performance. Secondly, the expected patch log-likelihood model was constructed and optimized, which reduced the running time and simplified the learning process. Finally, image restoration based on the optimized expected log-likelihood and a Gaussian Mixture Model (GMM) was accomplished through an approximate Maximum A Posteriori (MAP) algorithm. The experimental results show that the proposed approach can restore images degraded by various kinds of blur and additive noise, and that it outperforms state-of-the-art image restoration methods based on sparse priors in both Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM), with a better visual effect.
Research of control plane's anti-attacking in software-defined network based on Byzantine fault-tolerance
GAO Jie, WU Jiangxing, HU Yuxiang, LI Junfei
Journal of Computer Applications    2017, 37 (8): 2281-2286.   DOI: 10.11772/j.issn.1001-9081.2017.08.2281
The centralized control plane of Software-Defined Network (SDN) brings great convenience, but it also introduces many security risks. In view of the single point of failure, unknown vulnerabilities and backdoors, static configuration and other security problems of the controller, a secure SDN architecture based on the Byzantine protocol was proposed, in which the Byzantine protocol was executed among controllers, each switching device was controlled by a controller view, and control messages were decided jointly by several controllers. Furthermore, dynamics and heterogeneity were introduced into the proposed architecture, breaking the attack chain and enhancing the network's capability for active defense. Moreover, based on a quantification of controller heterogeneity, a two-stage algorithm was designed to seek the controller view, ensuring both the availability of the network and the security of the controller view. Simulation results show that the proposed architecture is more resistant to attacks than the traditional one.
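The idea of deciding each control message jointly by several controllers can be sketched as a majority vote over controller replies (a simplification: the paper runs a full Byzantine protocol, and the function name is hypothetical):

```python
from collections import Counter

def decide_control_message(replies):
    """Accept the control message agreed on by a strict majority of
    controllers; with 3f+1 controllers this masks up to f faulty ones."""
    if not replies:
        return None
    msg, votes = Counter(replies).most_common(1)[0]
    return msg if votes > len(replies) // 2 else None
```

A switch would apply the returned message and reject the decision (returning None here) when no strict majority exists.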
Sweep coverage optimization algorithm for mobile sensor node with limited sensing
SHEN Xianhao, LI Jun, NAI He
Journal of Computer Applications    2017, 37 (1): 60-64.   DOI: 10.11772/j.issn.1001-9081.2017.01.0060
In applications of mobile Wireless Sensor Networks (WSN), since the sensing range of the sensor nodes is limited, coverage of a target area becomes a sweep coverage problem. A new sweep coverage algorithm based on multi-objective optimization was proposed. Within the target area, a bi-objective optimization strategy was used for path planning of a single mobile sensor node, maximizing the node's coverage while minimizing the length of the sweep coverage path. Simulation experiments were carried out both with and without obstacles. Compared with the formation coverage algorithm for multiple nodes, the proposed algorithm can significantly reduce movement energy consumption while only moderately reducing the coverage rate.
Real-time alert correlation approach based on attack planning graph
ZHANG Jing, LI Xiaopeng, WANG Hengjun, LI Junquan, YU Bin
Journal of Computer Applications    2016, 36 (6): 1538-1543.   DOI: 10.11772/j.issn.1001-9081.2016.06.1538
Alert correlation approaches based on causal relationships cannot process massive alerts in time, and the attack scenario graphs they produce tend to split. To solve these problems, a novel real-time alert correlation approach based on Attack Planning Graph (APG) was proposed. Firstly, the definitions of APG and Attack Planning Tree (APT) were presented. A real-time alert correlation algorithm based on APG was then proposed, which builds an APG model from prior knowledge to reconstruct attack scenarios. Finally, an alert inference mechanism was applied to complete the attack scenarios and predict attacks. The experimental results show that the proposed approach is effective in processing massive alerts and rebuilding attack scenarios, with better real-time performance. It can be applied to analyze intrusion intentions and guide intrusion responses.
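The core correlation idea, matching incoming alerts against edges of a known attack planning graph to grow scenarios and predict next steps, might be sketched as follows (illustrative only; the matching rule and all names are assumptions, not the paper's algorithm):

```python
def correlate(alerts, apg_edges):
    """Append each alert to the first scenario whose last step forms an
    APG edge with it; otherwise start a new candidate scenario."""
    scenarios = []
    for alert in alerts:
        placed = False
        for sc in scenarios:
            if (sc[-1], alert) in apg_edges:
                sc.append(alert)
                placed = True
                break
        if not placed:
            scenarios.append([alert])
    return scenarios

def predict_next(scenario, apg_edges):
    """Predict possible next attack steps from the graph."""
    return sorted(b for a, b in apg_edges if a == scenario[-1])
```

A real system would also score partial matches and merge scenarios, which is what keeps reconstructed scenario graphs from splitting.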
Particle swarm optimization algorithm based on multi-strategy synergy
LI Jun, WANG Chong, LI Bo, FANG Guokang
Journal of Computer Applications    2016, 36 (3): 681-686.   DOI: 10.11772/j.issn.1001-9081.2016.03.681
Aiming at the shortcomings that Particle Swarm Optimization (PSO) easily falls into local optima and has low precision in the later stage of evolution, a modified Multi-Strategy synergy PSO (MSPSO) algorithm was proposed. Firstly, a probability threshold of 0.3 was set: in every iteration, if the randomly generated probability value was below the threshold, opposition-based learning was applied to the best individual to generate its opposite solution, improving the convergence speed and precision of PSO; otherwise, a Gaussian mutation strategy was applied to the particle positions to enhance population diversity. Secondly, a Cauchy mutation strategy with a linearly decreasing scale parameter was proposed to generate better solutions that guide the particles toward the optimal region. Finally, simulation experiments were conducted on eight benchmark functions. The MSPSO algorithm achieves convergence mean values of 1.68E+01, 2.36E-283, 8.88E-16, 2.78E-05 and 8.88E-16 on Rosenbrock, Schwefel's P2.22, Rotated Ackley, Quadric Noise and Ackley respectively, and converges to the optimal solution of 0 on Sphere, Griewank and Rastrigin, outperforming GDPSO (PSO based on Gaussian Disturbance) and GOPSO (PSO based on global best Cauchy mutation and Opposition-based learning). The results show that the proposed algorithm has higher convergence accuracy and can effectively avoid being trapped in local optima.
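The opposition-based learning operator applied to the best individual follows directly from its definition x' = lower + upper - x; a sketch of this one strategy (not the full MSPSO algorithm), keeping whichever of a candidate and its opposite is fitter on a test function:

```python
def opposite_solution(x, lower, upper):
    """Opposition-based learning: reflect a candidate solution about the
    centre of the search interval, x' = lower + upper - x."""
    return [lo + hi - xi for xi, lo, hi in zip(x, lower, upper)]

def sphere(x):
    """Sphere benchmark function, minimized at the origin."""
    return sum(xi * xi for xi in x)

def obl_step(x, lower, upper):
    """Keep the fitter of a particle and its opposite solution."""
    ox = opposite_solution(x, lower, upper)
    return x if sphere(x) <= sphere(ox) else ox
```

In MSPSO this operator fires with probability 0.3 per iteration and is applied to the best individual; here it is shown on a single candidate for clarity.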
Semi-supervised extreme learning machine and its application in analysis of near-infrared spectroscopy data
JING Shibo, YANG Liming, LI Junhui, ZHANG Siyun
Journal of Computer Applications    2016, 36 (2): 387-391.   DOI: 10.11772/j.issn.1001-9081.2016.02.0387
When insufficient training information is available, supervised Extreme Learning Machine (ELM) is difficult to use. Thus, applying semi-supervised learning to ELM, a Semi-Supervised ELM (SSELM) framework was proposed. However, it is difficult to find the optimal solution of SSELM due to its nonconvexity and nonsmoothness. Using a combinatorial optimization method, SSELM was solved by reformulating it as a linear mixed-integer program. Furthermore, SSELM was applied to the direct recognition of medicine and seed datasets using Near-InfraRed spectroscopy (NIR) technology. Compared with traditional ELM methods, the experimental results show that SSELM improves generalization when insufficient training information is available, which indicates the feasibility and effectiveness of the proposed method.
Adaptive residual error correction support vector regression prediction algorithm based on phase space reconstruction
LI Junshan, TONG Qi, YE Xia, XU Yuan
Journal of Computer Applications    2016, 36 (11): 3229-3233.   DOI: 10.11772/j.issn.1001-9081.2016.11.3229
Focusing on nonlinear time series prediction in analog circuit fault prediction and on the error accumulation problem in traditional Support Vector Regression (SVR) multi-step prediction, a new adaptive SVR prediction algorithm based on phase space reconstruction was proposed. Firstly, the significance of SVR multi-step prediction for time series trend forecasting and the error accumulation caused by multi-step prediction were analyzed. Secondly, phase space reconstruction was introduced into SVR prediction: the phase space of the analog circuit state time series was reconstructed, and then SVR prediction was carried out. Thirdly, by applying a second SVR to the accumulated-error sequence generated in the multi-step prediction process, adaptive correction of the initial prediction error was realized. Finally, the proposed algorithm was verified by simulation. The simulation and experimental results on health-degree prediction of analog circuits show that the proposed algorithm can effectively reduce the error accumulation caused by multi-step prediction, significantly improve the accuracy of regression estimation, and better predict the trend of analog circuit states.
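The phase space reconstruction step is a standard delay embedding; a minimal sketch (the embedding dimension and delay would in practice be chosen for the circuit data, e.g. by false-nearest-neighbour and mutual-information criteria):

```python
def phase_space_reconstruct(series, dim, tau):
    """Delay embedding: map a scalar series into dim-dimensional
    phase-space vectors with delay tau."""
    n = len(series) - (dim - 1) * tau
    return [[series[i + j * tau] for j in range(dim)] for i in range(n)]
```

Each embedded vector then serves as one SVR input, with the next sample of the series as the regression target.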
Stock market volatility forecast based on calculation of characteristic hysteresis
YAO Hongliang, LI Daguang, LI Junzhao
Journal of Computer Applications    2015, 35 (7): 2077-2082.   DOI: 10.11772/j.issn.1001-9081.2015.07.2077
Focusing on the issue that inflection points in stock price volatility are hard to forecast, which degrades forecast accuracy, a Lag Risk Degree Threshold Generalized Autoregressive Conditional Heteroscedasticity in Mean (LRD-TGARCH-M) model was proposed. Firstly, hysteresis was defined based on the inconsistency between stock price volatility and index volatility, and a Lag Degree (LD) calculation model based on the stock's energy volatility was proposed. The LD was then used to measure risk and inserted into the mean equation of the share price, to overcome the deficiency of the Threshold Generalized Autoregressive Conditional Heteroscedasticity in Mean (TGARCH-M) model in predicting inflection points. The LD was also put into the variance equation, motivated by the drastic volatility near inflection points, to optimize the variance dynamics and improve forecast accuracy. Finally, the volatility forecasting formulas and an accuracy analysis of the LRD-TGARCH-M model were given. Experimental results on Shanghai stock data show that the forecast accuracy increases by 3.76% compared with the TGARCH-M model and by 3.44% compared with the Exponential Generalized Autoregressive Conditional Heteroscedasticity in Mean (EGARCH-M) model, which proves that the LRD-TGARCH-M model can reduce errors in price volatility forecasting.
Object detection based on visual saliency map and objectness
LI Junhao, LIU Zhi
Journal of Computer Applications    2015, 35 (12): 3560-3564.   DOI: 10.11772/j.issn.1001-9081.2015.12.3560
A novel approach based on a visual saliency map and objectness was proposed for detecting salient objects in images. For each input image, a number of bounding boxes with high objectness scores were exploited to estimate the rough object location, and a scheme transferring the bounding-box-level objectness scores to the pixel level was used to weight the input saliency map. The input saliency map and the weighted saliency map were adaptively binarized, and the convex hull algorithm was used to obtain the maximum search region and the seed region, respectively. Finally, a globally optimal solution was obtained by combining the edge density with the search region and seed region. The experimental results on the public MSRA-B dataset with 5000 images show that the proposed approach outperforms the maximum saliency region method, the region diversity maximization method and the objectness detection method in terms of precision, recall and F-measure.
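Transferring box-level objectness scores to the pixel level, as used here to weight the saliency map, can be sketched as an accumulate-and-normalize step (illustrative assumptions: inclusive box corners and max normalization, which may differ from the paper's scheme):

```python
def objectness_to_pixels(width, height, boxes):
    """Accumulate box-level objectness scores onto the pixels each box
    covers; boxes are (x0, y0, x1, y1, score) with inclusive corners."""
    pix = [[0.0] * width for _ in range(height)]
    for x0, y0, x1, y1, score in boxes:
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                pix[y][x] += score
    # normalize to [0, 1] so the map can weight a saliency map
    m = max(max(row) for row in pix)
    if m > 0:
        pix = [[v / m for v in row] for row in pix]
    return pix
```

Multiplying this map element-wise with the input saliency map concentrates saliency inside regions many high-objectness boxes agree on.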
Parameters design and optimization of crosstalk cancellation system for two loudspeaker configuration
XU Chunlei, LI Junfeng, QIU Yuan, XIA Risheng, YAN Yonghong
Journal of Computer Applications    2014, 34 (5): 1503-1506.   DOI: 10.11772/j.issn.1001-9081.2014.05.1503
In three-dimensional sound reproduction with two loudspeakers, optimization of Crosstalk Cancellation System (CCS) performance has often considered factors such as inverse-filter design parameters and loudspeaker configuration independently. A frequency-domain Least-Squares (LS) approximation was proposed for performance optimization, and the relationships between these factors and their effects on CCS performance were evaluated systematically. The method obtains parameters that trade off the computational efficiency and the cancellation performance of the crosstalk cancellation algorithm. The crosstalk cancellation effect was evaluated with the Channel Separation (CS) and Performance Error (PE) indices, and the simulation results indicate that the obtained parameters achieve a good crosstalk cancellation effect.
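A frequency-domain least-squares crosstalk canceller for a two-loudspeaker (2x2) system inverts the acoustic transfer matrix with regularization at each frequency bin; a sketch for a single bin, with a hypothetical regularization constant beta (the paper's exact formulation and parameters may differ):

```python
def crosstalk_canceller(C, beta=1e-3):
    """Regularized LS inverse of a 2x2 complex transfer matrix C at one
    frequency bin: H = (C^H C + beta I)^-1 C^H."""
    (a, b), (c, d) = C
    # C^H C + beta I (2x2 Hermitian matrix)
    m00 = a.conjugate() * a + c.conjugate() * c + beta
    m01 = a.conjugate() * b + c.conjugate() * d
    m10 = b.conjugate() * a + d.conjugate() * c
    m11 = b.conjugate() * b + d.conjugate() * d + beta
    det = m00 * m11 - m01 * m10
    # closed-form 2x2 inverse, then multiply by C^H
    inv = ((m11 / det, -m01 / det), (-m10 / det, m00 / det))
    ch = ((a.conjugate(), c.conjugate()), (b.conjugate(), d.conjugate()))
    return tuple(
        tuple(sum(inv[i][k] * ch[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )
```

Larger beta tames ill-conditioned bins (shorter, safer filters) at the cost of cancellation depth, which is exactly the efficiency/performance tradeoff the abstract mentions.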
Short-term electricity load forecasting based on complementary ensemble empirical mode decomposition-fuzzy permutation and echo state network
LI Qing, LI Jun, MA Hao
Journal of Computer Applications    2014, 34 (12): 3651-3655.  
A combined forecasting method based on Complementary Ensemble Empirical Mode Decomposition (CEEMD) with fuzzy entropy and an Echo State Network with Leaky-integrator neurons (LiESN) was proposed to improve the precision of short-term power load forecasting. Firstly, to reduce the computation scale of the partial analysis of the power load series and improve forecasting accuracy, the load time series was decomposed by CEEMD with fuzzy entropy into a series of subsequences with clearly different complexity. Then, according to the characteristics of each subsequence, corresponding LiESN forecasting submodels were built, and the final forecast was obtained by superposing the submodel forecasts. The CEEMD-LiESN method was applied to short-term electricity load forecasting for the New England region. The experimental results show that the proposed combined forecasting method has high prediction precision.
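The leaky-integrator ESN submodels update their reservoir state as x(t+1) = (1 - a)x(t) + a*tanh(W_in*u + W*x(t)); a single-input sketch (the weights, reservoir size and leak rate here are illustrative, not trained values):

```python
import math

def liesn_update(state, u, w_in, w_res, leak=0.3):
    """One leaky-integrator ESN reservoir update:
    x(t+1) = (1 - a) * x(t) + a * tanh(W_in * u + W * x(t))."""
    n = len(state)
    new = []
    for i in range(n):
        pre = w_in[i] * u + sum(w_res[i][j] * state[j] for j in range(n))
        new.append((1 - leak) * state[i] + leak * math.tanh(pre))
    return new
```

In a full ESN only the linear readout on top of these states is trained; the reservoir weights stay fixed, which keeps the submodels cheap to fit per subsequence.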
Single video temporal super-resolution reconstruction algorithm based on maximum a posteriori
GUO Li, LIAO Yu, CHEN Weilong, LIAO Honghua, LI Jun, XIANG Jun
Journal of Computer Applications    2014, 34 (12): 3580-3584.  
Any video camera has a limited temporal resolution, which causes motion blur and motion aliasing in the captured video sequence. Spatial deblurring and temporal interpolation are usually adopted to alleviate this, but they cannot solve the problem at its origin. A temporal super-resolution reconstruction method for a single video based on Maximum A Posteriori (MAP) estimation was proposed. The conditional probability model was determined by the reconstruction constraint, and the prior model was established from the temporal self-similarity of the video itself. From these two models the MAP estimate was obtained; that is, a high-temporal-resolution video was reconstructed from a single low-temporal-resolution video, effectively removing the motion blur caused by overlong exposure time and the motion aliasing caused by an inadequate camera frame rate. Theoretical analysis and experiments demonstrate that the proposed method is effective and efficient.
Fuzzy rule extraction based on genetic algorithm
GUO Yiwen, LI Jun, GENG Linxiao
Journal of Computer Applications    2014, 34 (10): 2899-2903.   DOI: 10.11772/j.issn.1001-9081.2014.10.2899
To avoid the limitations of traditional fuzzy rules based on Genetic Algorithm (GA), a calculation method for fuzzy control rules containing weight coefficients was presented, in which GA was used to find the best weight coefficients for computing the fuzzy rules. In this method, different weight coefficients could be assigned to different input levels, and the correlation and symmetry of the weight coefficients could be used to assess all the fuzzy rules and reduce the influence of invalid rules. Performance comparison experiments show that a system built from these fuzzy rules has small overshoot and short settling time, and is practical for fuzzy control applications. Experiments with different stimulus signals show that the system does not rely on the stimulus signal, and has a good tracking effect and strong robustness.
Passenger route choice behavior on transit network with real-time information at stops
ZENG Ying, LI Jun, ZHU Hui
Journal of Computer Applications    2013, 33 (10): 2964-2968.  
With the development of intelligent transportation information systems, intelligent public transportation systems are gradually being popularized. Such systems provide transit passengers with various kinds of real-time information on network conditions, thereby affecting passengers' travel choice behavior and improving travel convenience and flexibility, so as to improve the social benefit and service level of the public transit system. Considering the particularity of the transit network, and taking the electronic bus stop information of Chengdu as an example, a questionnaire was designed to investigate passengers' route choice behavior and travel intentions. Using qualitative and quantitative analysis and random utility theory, route choice models based on the Logit model and the mixed Logit model were established, with the characteristic variables of the alternatives and passengers' personal socio-economic attributes as explanatory variables. Monte Carlo simulation and maximum likelihood estimation were used to estimate the parameters. The results indicate that the differences in route choice behavior arising from individual preferences can be reasonably interpreted by the mixed Logit model, which helps to better understand the complexity of transit behavior and to guide applications.
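The multinomial Logit model underlying the route choice analysis assigns each alternative a probability proportional to the exponential of its utility; a minimal sketch (the utilities would come from the estimated explanatory variables):

```python
import math

def logit_probabilities(utilities):
    """Multinomial logit choice probabilities:
    P_i = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)                      # subtract max for numerical stability
    e = [math.exp(v - m) for v in utilities]
    s = sum(e)
    return [x / s for x in e]
```

The mixed Logit model used in the paper additionally draws the utility coefficients from a random distribution per passenger, which is what captures individual preference heterogeneity.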
Algorithm for modulation recognition based on cumulants in Rayleigh channel
ZHU Hongbo, ZHANG Tianqi, WANG Zhichao, LI Junwei
Journal of Computer Applications    2013, 33 (10): 2765-2768.  
Concerning the problem of modulation identification in the Rayleigh channel, a new algorithm based on cumulants was proposed. The method is efficient and can easily classify seven kinds of signals, namely BPSK (Binary Phase Shift Keying), QPSK (Quadrature Phase Shift Keying), 4ASK (4-ary Amplitude Shift Keying), 16QAM (16-ary Quadrature Amplitude Modulation), 32QAM (32-ary Quadrature Amplitude Modulation), 64QAM (64-ary Quadrature Amplitude Modulation) and OFDM (Orthogonal Frequency Division Multiplexing), by using a decision tree classifier with feature parameters extracted from combinations of fourth-order and sixth-order cumulants. Theoretical derivation and analysis show that the algorithm is insensitive to Rayleigh fading and Additive White Gaussian Noise (AWGN). The computer simulation results show that the success rates exceed 90% when the Signal-to-Noise Ratio (SNR) is higher than 4 dB in the Rayleigh channel, which demonstrates the feasibility and effectiveness of the proposed algorithm.
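The kind of higher-order statistic used as a feature can be illustrated with the fourth-order cumulant C40 = E[x^4] - 3E[x^2]^2, which separates, for example, ideal BPSK (|C40| = 2) from ideal QPSK (|C40| = 1) constellations. This is a sketch of one feature only; the paper combines fourth- and sixth-order cumulants in a decision tree:

```python
import math

def c40_and_c21(symbols):
    """Fourth-order cumulant C40 = E[x^4] - 3*E[x^2]^2 and second-order
    moment C21 = E[|x|^2] of a zero-mean complex symbol sequence."""
    n = len(symbols)
    m2 = sum(x * x for x in symbols) / n          # E[x^2]
    m4 = sum(x ** 4 for x in symbols) / n         # E[x^4]
    c21 = sum(abs(x) ** 2 for x in symbols) / n   # E[|x|^2]
    return m4 - 3 * m2 * m2, c21

# ideal noise-free unit-power constellations
bpsk = [1 + 0j, -1 + 0j]
s = 1 / math.sqrt(2)
qpsk = [complex(s, s), complex(-s, s), complex(-s, -s), complex(s, -s)]
```

Normalizing |C40| by C21^2 makes the feature scale-invariant, which is one reason such statistics tolerate fading.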
Overview of complex event processing technology and its application in logistics Internet of Things
JING Xin, ZHANG Jing, LI Junhuai
Journal of Computer Applications    2013, 33 (07): 2026-2030.   DOI: 10.11772/j.issn.1001-9081.2013.07.2026
Complex Event Processing (CEP) is an advanced analytical technology that processes high-velocity event streams in real time and is primarily applied in Event-Driven Architecture (EDA) systems; it helps realize intelligent business in many applications. To report its research status, this paper introduced the basic meaning and salient features of CEP, and proposed a system architecture model composed of nine parts. The main constituents of the model were then reviewed in terms of key technologies and their formalization. To illustrate how to use CEP in the logistics Internet of Things, an application framework with CEP embedded in it was also proposed. It can be concluded that CEP has many merits and can play an important role in its application fields. Finally, the shortcomings of this research domain were pointed out and future work was discussed. The paper systematically analyzes CEP technology in terms of both theory and practice so as to further its development.
Particle swarm optimization algorithm with fast convergence and adaptive escape
SHI Xiaolu, SUN Hui, LI Jun, ZHU Degang
Journal of Computer Applications    2013, 33 (05): 1308-1312.   DOI: 10.3724/SP.J.1087.2013.01308
To overcome the drawbacks of Particle Swarm Optimization (PSO), which converges slowly in the last stage and easily falls into local minima, a new PSO algorithm with fast convergence and adaptive escape (FAPSO), inspired by the Artificial Bee Colony (ABC) algorithm, was proposed. For each particle, FAPSO conducts two search operations: one global and one local. When a particle gets stuck, the adaptive escape operator is used to search around that particle again. Experiments were conducted on eight classical benchmark functions. The simulation results demonstrate that the proposed approach improves the convergence rate and solution accuracy compared with some recently proposed PSO variants such as CLPSO, and the t-test results confirm its clear superiority.
Transit assignment based on stochastic user equilibrium with passengers' perception consideration
ZENG Ying, LI Jun, ZHU Hui
Journal of Computer Applications    2013, 33 (04): 1149-1152.   DOI: 10.3724/SP.J.1087.2013.01149
Concerning the special nature of the transit network, a generalized path concept that can readily describe passenger route choice behavior was put forward, and the key cost of each path was considered. Based on the analytical framework of cumulative prospect theory and passengers' perception, a stochastic user equilibrium assignment model was developed. A simple example showed that the proposed method effectively overcomes the limitations of the traditional method, improving on the assumption of complete rationality in traditional models. It helps to enhance understanding of the complexity of urban public transportation behavior and decision-making rules. The result can inform the facility layout and planning of public transportation as well as the evaluation of its service level, and can also serve as valid data support for traffic guidance.
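Cumulative prospect theory evaluates gains and losses relative to a reference point with an asymmetric value function; a sketch using the commonly cited Tversky-Kahneman parameter values (the paper's calibration for transit costs may differ):

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: gains are valued concavely,
    losses convexly and scaled by the loss-aversion factor lam."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)
```

Because lam > 1, a loss of a given size hurts more than an equal gain helps, which is how the model departs from the complete-rationality assumption of traditional equilibrium models.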
Collaborative recommendation method improvement based on social network analysis
FENG Yong, LI Junping, XU Hongyan, DANG Xiaowan
Journal of Computer Applications    2013, 33 (03): 841-844.   DOI: 10.3724/SP.J.1087.2013.00841
Collaborative recommendation is widely used in E-commerce personalized services, but existing methods cannot provide highly personalized service due to data sparsity and cold start. To improve the accuracy of collaborative recommendation, a collaborative recommendation method based on Social Network Analysis (SNA) was proposed. The method used SNA to analyze the trust relationships between users, quantified those relationships as trust values to fill the user-item matrix, and used the trust values to calculate user similarity. The effectiveness of the proposed method was verified experimentally: using trust values to expand the user-item matrix not only effectively alleviates data sparsity and cold start, but also improves the accuracy of collaborative recommendation.
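Filling the user-item matrix with trust-weighted neighbour ratings might be sketched as follows (illustrative assumptions: 0 marks a missing rating, and the trust values are taken as given rather than derived by SNA):

```python
def fill_with_trust(ratings, trust):
    """Fill missing user-item ratings (marked 0) with the trust-weighted
    average of other users' ratings for that item."""
    users = list(ratings)
    items = range(len(next(iter(ratings.values()))))
    filled = {u: list(r) for u, r in ratings.items()}
    for u in users:
        for i in items:
            if filled[u][i] == 0:
                num = sum(trust.get((u, v), 0) * ratings[v][i]
                          for v in users if v != u)
                den = sum(trust.get((u, v), 0)
                          for v in users if v != u and ratings[v][i] != 0)
                if den > 0:
                    filled[u][i] = num / den
    return filled
```

User similarity (e.g. Pearson or cosine) is then computed on the densified matrix, which is how the trust values mitigate sparsity and cold start.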
Reference | Related Articles | Metrics
Method of Deep Web entities identification based on BP neural network
XU Hongyan, DANG Xiaowan, FENG Yong, LI Junping
Journal of Computer Applications    2013, 33 (03): 776-779.   DOI: 10.3724/SP.J.1087.2013.00776
Abstract766)      PDF (635KB)(449)       Save
To solve the problems of the low level of automation and poor adaptability of current entity recognition methods, a Deep Web entity recognition method based on Back Propagation (BP) neural network was proposed. The method first divided the entities into semantic blocks, then used the similarities of the semantic blocks as the input of a BP neural network, and finally obtained a correct entity recognition model through training, relying on the self-learning ability of the BP neural network. The method can automate entity recognition across heterogeneous data sources. The experimental results show that the method not only reduces manual intervention, but also improves the efficiency and accuracy of entity recognition.
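The training step can be sketched with a minimal one-hidden-layer BP network. The feature vectors here stand for semantic-block similarities between two records; the architecture, sizes, and toy data are illustrative, not the paper's configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, y, hidden=4, lr=1.0, epochs=5000, seed=0):
    """Train a one-hidden-layer BP network on rows of semantic-block
    similarities; y[k] = 1 when the two records describe the same entity."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                 # forward pass
        out = sigmoid(H @ W2 + b2)
        d2 = out - y[:, None]                    # sigmoid + cross-entropy gradient
        d1 = d2 @ W2.T * H * (1.0 - H)           # back-propagated hidden error
        W2 -= lr * H.T @ d2 / len(X); b2 -= lr * d2.mean(0)
        W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(0)
    return lambda Z: sigmoid(sigmoid(Z @ W1 + b1) @ W2 + b2).ravel()

# toy data: high block similarities -> same entity
X = np.array([[0.9, 0.8, 0.95], [0.1, 0.2, 0.05],
              [0.85, 0.9, 0.8], [0.2, 0.1, 0.3]])
y = np.array([1.0, 0.0, 1.0, 0.0])
predict = train_bp(X, y)
```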
Reference | Related Articles | Metrics
DPST: a scheduling algorithm of preventing slow task thrashing in heterogeneous environment
DUAN Han-cong, LI Jun-jie, CHEN Cheng, LI Lin
Journal of Computer Applications    2012, 32 (07): 1910-1912.   DOI: 10.3724/SP.J.1087.2012.01910
Abstract958)      PDF (617KB)(625)       Save
With regard to the thrashing problem of load-balancing algorithms in heterogeneous environments, a new scheduling algorithm called Dynamic Predetermination of Slow Task (DPST) was designed to reduce the probability of slow task scheduling and improve load balancing. By defining a capability measure for heterogeneous tasks on heterogeneous nodes, the capacity of nodes executing heterogeneous tasks was normalized. The introduction of predetermination reduced the thrashing caused by heterogeneous environments, and double queues of slow tasks and slow nodes improved scheduling efficiency. The experimental results show that the number of thrashing occurrences in heterogeneous environments fell by more than 40% compared with Hadoop. Because thrashing is reduced effectively, the DPST algorithm achieves lower average response time and higher system throughput in heterogeneous environments.
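The predetermination idea can be sketched as follows. The bottleneck-ratio capability measure, the 0.5 slow threshold, and the round-robin dispatch are illustrative stand-ins for the paper's definitions.

```python
from collections import deque

def capacity(node_speed, task_demand):
    """Capability of a node for a task: the bottleneck ratio of supplied
    speed to demanded work across resource types."""
    return min(node_speed[r] / task_demand[r] for r in task_demand)

def schedule(nodes, tasks, slow_ratio=0.5):
    """Predetermine slow tasks before launch: a task whose target node
    offers less than slow_ratio of the best available capacity for it is
    deferred to its own queue and re-matched to a capable node, instead of
    being launched, detected as slow, and rescheduled (thrashing)."""
    assignments, deferred = [], deque()
    ring = deque(nodes)                          # naive round-robin dispatch
    for tname, demand in tasks:
        best = max(capacity(speed, demand) for _, speed in nodes)
        nname, speed = ring[0]; ring.rotate(-1)
        if capacity(speed, demand) < slow_ratio * best:
            deferred.append((tname, demand))     # predetermined slow here
        else:
            assignments.append((nname, tname))
    while deferred:                              # second pass: best-fit nodes
        tname, demand = deferred.popleft()
        nname, _ = max(nodes, key=lambda n: capacity(n[1], demand))
        assignments.append((nname, tname))
    return assignments

nodes = [("fast", {"cpu": 4.0}), ("slow", {"cpu": 1.0})]
tasks = [("t1", {"cpu": 1.0}), ("t2", {"cpu": 1.0})]
```

With these toy inputs, round-robin alone would put `t2` on the slow node; predetermination catches the mismatch and both tasks land on the fast node.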
Reference | Related Articles | Metrics
Quartic Hermite interpolating splines with parameters
LI Jun-cheng, LIU Chun-ying, YANG Lian
Journal of Computer Applications    2012, 32 (07): 1868-1870.   DOI: 10.3724/SP.J.1087.2012.01868
Abstract1448)      PDF (591KB)(757)       Save
To overcome the defects of the standard cubic Hermite interpolating splines, a class of quartic Hermite interpolating splines with parameters was presented, which inherits the properties of the standard cubic Hermite interpolating splines. For given interpolation conditions, the shape of the proposed splines can be adjusted by changing the values of the parameters. If the parameters are chosen properly, the quartic Hermite interpolating splines achieve C2 continuity and approximate the interpolated functions better than the standard cubic Hermite interpolating splines. The proposed splines further enrich the theory of Hermite interpolating splines and provide a new method for constructing interpolation curves and surfaces.
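One concrete quartic Hermite family with a shape parameter can be built by adding a quartic bump to the standard cubic Hermite segment: the bump t²(1−t)² and its first derivative vanish at both endpoints, so every choice of the parameter satisfies the same interpolation conditions while changing the interior shape. This is an illustrative construction; the paper's basis functions may differ.

```python
def hermite_quartic(p0, p1, m0, m1, lam=0.0):
    """Quartic Hermite segment on [0, 1] with s(0)=p0, s(1)=p1, s'(0)=m0,
    s'(1)=m1.  lam=0 recovers the standard cubic Hermite segment; any lam
    keeps the endpoint values and tangents because the added bump
    t^2 (1-t)^2 and its derivative are zero at t=0 and t=1."""
    def s(t):
        h00 = 2*t**3 - 3*t**2 + 1       # standard cubic Hermite basis
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        return h00*p0 + h10*m0 + h01*p1 + h11*m1 + lam * t**2 * (1 - t)**2
    return s
```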
Reference | Related Articles | Metrics
Network congestion status prediction with multidimensional statistical methods
WU Ping, WU Bin, LI Xin, LI Jun, HUANG Hong-wei
Journal of Computer Applications    2012, 32 (05): 1251-1254.  
Abstract779)      PDF (1824KB)(719)       Save
To evaluate more accurately the average queue length and the queue waiting time, the two core indicators of congestion control algorithms, in networks with priority scheduling service, a computational model covering the data arrival process, the data departure process, and the priority scheduling service was designed by using three statistical methods: the Pareto distribution, the Poisson random process, and the weighted average method. The computational function of the curve shape parameter was deduced by the matrix method. Comparison between simulation results from a test bed and the results of the computational model shows that the deviation is small, which proves that the new model can correctly predict the congestion status of the network.
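The statistical ingredients can be combined in a simple queueing sketch: Poisson arrivals, Pareto-distributed service times, Lindley's recursion for the waiting time, and a running average as the congestion indicator. The priority scheduling discipline of the paper's model is omitted here for brevity, and all parameter values are illustrative.

```python
import random

def pareto_sample(alpha, xm, u):
    """Inverse-CDF sample of a Pareto(alpha, xm) service time, u in [0, 1)."""
    return xm / (1.0 - u) ** (1.0 / alpha)

def average_wait(lam=0.8, alpha=2.5, xm=0.2, n=50000, seed=1):
    """Single-server FIFO sketch: Poisson arrivals (exponential gaps of
    rate lam), Pareto service times, Lindley's recursion for the waiting
    time of successive packets, and a running average of that waiting
    time as the congestion indicator."""
    rng = random.Random(seed)
    w, avg = 0.0, 0.0
    for k in range(1, n + 1):
        gap = rng.expovariate(lam)               # Poisson arrival process
        service = pareto_sample(alpha, xm, rng.random())
        w = max(0.0, w + service - gap)          # Lindley recursion
        avg += (w - avg) / k                     # running average
    return avg
```

With these parameters the utilization is about 0.27, so the simulated average wait stays small; heavier Pareto tails (smaller alpha) push the indicator up, which is the kind of sensitivity the analytical model is built to predict.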
Reference | Related Articles | Metrics
Improved fuzzy C-means clustering algorithm based on distance correction
LOU Xiao-jun, LI Jun-ying, LIU Hai-tao
Journal of Computer Applications    2012, 32 (03): 646-648.   DOI: 10.3724/SP.J.1087.2012.00646
Abstract1283)      PDF (446KB)(600)       Save
Based on Euclidean distance, the classic Fuzzy C-Means (FCM) clustering algorithm has an equal-partition tendency for data sets, and its clustering accuracy is low when the distribution of data points is not spherical. To solve these problems, a distance correction factor based on dot density was introduced, a distance matrix incorporating this factor was built to measure the differences between data points, and the new matrix was applied to modify the classic FCM algorithm. Two sets of experiments using artificial data and UCI data were conducted, and the results show that the proposed algorithm is suitable for non-spherical data sets and outperforms the classic FCM algorithm in clustering accuracy.
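The modification can be sketched by rescaling each point's distances with a dot-density factor inside an otherwise standard FCM loop. The particular correction form (mean density over the point's own density) is illustrative; the paper defines its own factor.

```python
import numpy as np

def fcm_density(X, c=2, m=2.0, iters=100, radius=1.0, seed=0):
    """Fuzzy C-means with a dot-density distance correction: each point's
    distances to all centres are rescaled by mean_density / its_density,
    so points in sparse regions look farther from every centre."""
    rng = np.random.default_rng(seed)
    n = len(X)
    pair = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    density = (pair < radius).sum(1).astype(float)   # dot density per point
    corr = density.mean() / density                  # correction factor
    U = rng.random((c, n)); U /= U.sum(0)            # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        V = Um @ X / Um.sum(1, keepdims=True)        # weighted centres
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) * corr
        d = np.maximum(d, 1e-10)                     # guard exact hits
        p = 2.0 / (m - 1.0)
        U = (d ** -p) / (d ** -p).sum(0)             # membership update
    return U, V
```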
Reference | Related Articles | Metrics
Image median filtering algorithm based on grey absolute relation
YANG Fang-fang, ZHANG You-hui, WANG Zhi-wei, LI Jun-hong, DONG Rui
Journal of Computer Applications    2011, 31 (12): 3357-3359.  
Abstract1044)      PDF (669KB)(619)       Save
This paper integrated the characteristics of the grey absolute relation with the advantages of the median filter. The pixels within an n×n template, where n is an odd number greater than or equal to 3, were combined into two sequences; the grey absolute relation was then used to determine the similarity between the two sequences; finally, the degree of similarity was adopted to judge whether the current pixel is noise, and if so, its value was replaced by the median of the filter window. The experimental results show that this algorithm achieves a better filtering effect than the standard median filter and other filtering methods while keeping more details of the original image.
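The two building blocks can be sketched as follows. The grey absolute relational degree uses Liu's standard formulation; the per-pixel decision rule (compare the window against the same window with the centre replaced by the neighbourhood median) and the threshold are one illustrative reading of the abstract, not the paper's exact test.

```python
import numpy as np

def grey_absolute_degree(a, b):
    """Grey absolute relational degree of two equal-length sequences
    (Liu's formulation); values near 1 mean similar shapes."""
    a0 = np.asarray(a, float); a0 = a0 - a0[0]   # zero-starting-point images
    b0 = np.asarray(b, float); b0 = b0 - b0[0]
    s = lambda x: abs(x[1:-1].sum() + 0.5 * x[-1])
    sa, sb, sd = s(a0), s(b0), s(b0 - a0)
    return (1 + sa + sb) / (1 + sa + sb + sd)

def filter_pixel(window, thresh=0.9):
    """Treat the centre of a flattened n*n window as noise when replacing
    it by the neighbourhood median changes the sequence's shape too much,
    as measured by the grey absolute relation; keep it otherwise."""
    w = np.asarray(window, float).ravel()
    centre = len(w) // 2
    med = float(np.median(np.delete(w, centre)))
    alt = w.copy(); alt[centre] = med
    return med if grey_absolute_degree(w, alt) < thresh else float(w[centre])
```

An impulse (e.g. a 255 spike in a flat 100-valued window) drops the relational degree well below 1 and is replaced, while a pixel on a smooth gradient leaves the degree at 1 and survives, which is how detail is preserved.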
Related Articles | Metrics
Applications of molecular-kinetic-theory-based clustering approach on gene expression data
LI Jun-lin, FU Hong-guang
Journal of Computer Applications    2011, 31 (10): 2774-2777.   DOI: 10.3724/SP.J.1087.2011.02774
Abstract1330)      PDF (653KB)(614)       Save
In order to find diagnostic genes that may assist in disease diagnosis, clustering technologies are commonly used to analyze gene expression data. The molecular-kinetic-theory-based clustering approach is a new and effective clustering technique: it finds data clusters by following a molecular kinetic mechanism. This dynamic clustering approach does not require presetting the number of clusters and can be used to estimate it. The authors applied the method to gene expression data to estimate the number of clusters and to identify possible diagnostic genes according to relevant clustering criteria. The simulation results and analysis verify the good knowledge discovery ability of this approach.
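The abstract does not detail the kinetic equations, but the key property, a cluster count that emerges from the dynamics rather than being preset, can be illustrated with a mean-shift-style analogy in which points attract each other within a radius and coalesce. This is an illustrative stand-in, not the paper's kinetic model.

```python
import numpy as np

def kinetic_cluster(X, radius=1.0, iters=50, tol=1e-3):
    """Dynamic clustering sketch: every point repeatedly moves to the mean
    of its neighbours within `radius` (an attraction force), points
    coalesce over the iterations, and the number of coalesced groups --
    never preset -- is the estimated cluster count."""
    P = np.asarray(X, float).copy()
    for _ in range(iters):
        D = np.linalg.norm(P[:, None] - P[None, :], axis=2)
        M = (D < radius).astype(float)           # neighbourhood indicator
        P = M @ P / M.sum(1, keepdims=True)      # move to neighbourhood mean
    labels = -np.ones(len(P), dtype=int)
    k = 0
    for i in range(len(P)):                      # merge coalesced points
        if labels[i] < 0:
            labels[np.linalg.norm(P - P[i], axis=1) < tol] = k
            k += 1
    return labels, k
```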
Related Articles | Metrics